scheduler: preserve prompt checkpoints in chunked prefill resume path #221

krystophny wants to merge 2 commits into waybarrios:main
Conversation
Critical review found a blocking scheduler issue: the chunked-prefill path still does not invoke prompt_checkpoint_callback.
This is the right approach. We had #194 open for the same crash (one-line unpack fix) but just closed it; your PR is the only one of the four submissions that actually preserves checkpoint semantics through the chunked prefill lifecycle instead of silently discarding them. The checkpoint-aware finalization step (feeding the remaining checkpoint tokens through the model before generation) is what the others all miss. Happy to test once the callback gap is resolved. We run chunked prefill on Qwen3.5-122B (M2 Ultra 128GB) with 2048-token prefill steps.
The upstream BatchGenerator contract requires prompt_checkpoint_callback to fire after cache finalization, before the checkpoint tail model call. The chunked-prefill monkeypatch preserved the checkpoint field but never invoked the callback, breaking the upstream checkpoint contract. Wire _lazy_extract_cache from mlx-lm and invoke the callback at the correct semantic boundary. Add regression test verifying the callback fires with the correct uid and checkpoint offset.
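The contract described above fixes an ordering: cache finalization first, then the callback, then the checkpoint tail model call. A minimal sketch of that ordering, with stub names (StubCache, finish_chunked_prefill) that are illustrative stand-ins rather than the actual vllm_mlx/scheduler.py code:

```python
# Sketch of the ordering the upstream contract requires:
#   1. finalize the prompt cache,
#   2. fire prompt_checkpoint_callback,
#   3. feed the checkpoint tail through the model.
# All names here are hypothetical stand-ins for the real scheduler code.

class StubCache:
    def __init__(self):
        self.finalized = False

    def finalize(self):
        self.finalized = True


def finish_chunked_prefill(uid, cache, tokens, checkpoint, model, callback=None):
    """Complete prefill for one sequence at the contract's semantic boundary."""
    cache.finalize()                  # 1. cache finalization
    if callback is not None:
        callback(uid, checkpoint)     # 2. callback fires after finalization
    tail = tokens[:checkpoint]        # 3. checkpoint tail model call
    return model(tail, cache)


if __name__ == "__main__":
    events = []
    cache = StubCache()

    def model(tail, c):
        events.append(("model", tuple(tail), c.finalized))
        return tail

    def callback(uid, offset):
        events.append(("callback", uid, offset))

    finish_chunked_prefill("req-1", cache, [1, 2, 3, 4, 5], 2, model, callback)
    # Callback fired after finalize and before the tail model call:
    assert events == [("callback", "req-1", 2), ("model", (1, 2), True)]
```

The point of the sketch is only the sequencing: the callback must observe a finalized cache, and the tail model call must come last.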
Thump604 left a comment
The PR correctly addresses the callback gap that was flagged. The fix preserves prompt_checkpoints through partial resume and invokes the callback after finalization, which the prior submissions missed. The semantic changes to checkpoint-aware bounds checking are sound.
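For readers outside the codebase, "checkpoint-aware bounds checking" of the kind referenced here can be sketched as follows; the function and its tuple-free signature are illustrative, not the actual scheduler code:

```python
# Hypothetical sketch of checkpoint-aware bounds checking: given a prompt
# length and a checkpoint offset, compute how many tokens remain to process.
# Offsets are expected to be positive; a non-positive checkpoint is treated
# as "no checkpoint" rather than silently negated.

def remaining_tokens(total_len: int, checkpoint: int) -> int:
    if checkpoint <= 0:
        return total_len          # no checkpoint: the whole prompt remains
    if checkpoint > total_len:
        raise ValueError("checkpoint offset beyond prompt length")
    return total_len - checkpoint


assert remaining_tokens(10, 4) == 6   # 4 tokens already checkpointed
assert remaining_tokens(10, 0) == 10  # no checkpoint recorded
```

This is one defensible policy; as the concerns below note, the PR's actual handling of non-positive offsets deserves an explanatory comment.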
However, I have three concrete concerns before this merges:

1. Test coverage is incomplete. test_chunked_prefill_accepts_prompt_checkpoints constructs a 7-tuple but never exercises the actual zip(*batch_prompts) unpack on line 414. test_chunked_prefill_invokes_checkpoint_callback doesn't validate the "checkpoint tail" model execution (lines 313-318), which is the most complex part of the fix. A test with checkpoint > 1 initially (not just on resume) would strengthen confidence.

2. Line 425 has a suspicious defensive formula: (ln - pc if pc > 0 else -pc). Checkpoint offsets should always be positive, so why the negation when pc <= 0? This deserves a comment explaining the intent: is this dead code, or does mlx-lm ever produce zero or negative checkpoints?

3. The checkpoint tail replay (lines 313-318) feeds tokens 0..checkpoint-1 through the model after finalize(). Is this the intended semantics? The comment on line 303 says "once only the checkpoint tail remains", but then we immediately replay it. If the cache already saw these tokens during chunked processing, feeding them again seems redundant. I need clarification: does finalize() clear cache state, or just mark it finalized? If it clears, the replay is necessary; if not, it may be redundant computation.

None of this blocks the merge; all three can be addressed post-merge. But they should be addressed before production use on 122B at 2048-token prefill steps. Otherwise this is good work fixing a real checkpoint contract violation.
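To make the first concern concrete, the missing coverage could look roughly like this: build 7-tuple prompts with checkpoint > 1 and drive them through a zip(*batch_prompts)-style unpack. The tuple layout and test name are hypothetical stand-ins, not the actual tests/test_batching.py code:

```python
# Hypothetical sketch of the coverage the review asks for: exercise the
# zip(*batch_prompts) unpack on 7-tuples whose seventh field carries
# prompt checkpoints, with checkpoint > 1 from the start (not only on resume).
# The (uid, tokens, offset, a, b, c, prompt_checkpoints) layout is illustrative.

def test_seven_tuple_unpack_with_checkpoints():
    batch_prompts = [
        ("req-1", [1, 2, 3, 4], 0, None, None, None, [2]),
        ("req-2", [5, 6, 7, 8], 0, None, None, None, [3]),
    ]
    # This is the unpack shape the existing test never reaches:
    uids, tokens, offsets, _, _, _, checkpoints = zip(*batch_prompts)

    assert uids == ("req-1", "req-2")
    assert checkpoints == ([2], [3])
    # checkpoint > 1 initially, as the review requests
    assert all(cp[0] > 1 for cp in checkpoints)


test_seven_tuple_unpack_with_checkpoints()
```

A 6-field tuple in the same batch would make zip(*batch_prompts) yield ragged rows and the unpack raise ValueError, which is exactly the crash class #194 patched.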
Summary

Preserve prompt_checkpoints across chunked-prefill partial resume and finalization for mlx-lm prompt tuples, and invoke the upstream prompt_checkpoint_callback contract.

Why
Recent mlx-lm prompt tuples carry a seventh prompt_checkpoints field. Upstream one-line fixes cover the immediate unpack crash in _chunked_next, but the chunked-prefill monkeypatch also needs to retain that field when it stores partial progress and resumes generation later.

Additionally, the upstream BatchGenerator invokes prompt_checkpoint_callback after cache finalization. The chunked-prefill monkeypatch was missing this callback invocation, breaking the checkpoint contract.

What changed
- Wire _lazy_extract_cache from mlx-lm and invoke prompt_checkpoint_callback at the correct semantic boundary (after c.finalize(), before the checkpoint tail model call)
- Add a regression test verifying the callback fires with the correct uid and checkpoint offset

Files to review
- vllm_mlx/scheduler.py
- tests/test_batching.py

Related PRs
- #194 and #156, which address only the immediate unpack failure in _chunked_next

Validation